Results 1 - 7 of 7
1.
J Med Internet Res ; 26: e48168, 2024 Feb 27.
Article in English | MEDLINE | ID: mdl-38412023

ABSTRACT

BACKGROUND: Conversational agents (CAs), or chatbots, are computer programs that mimic human conversation. They have the potential to improve access to mental health interventions through automated, scalable, and personalized delivery of psychotherapeutic content. However, digital health interventions, including those delivered by CAs, often have high attrition rates. Identifying the factors associated with attrition is critical to improving future clinical trials. OBJECTIVE: This review aims to estimate the overall and differential rates of attrition in CA-delivered mental health interventions (CA interventions), evaluate the impact of study design and intervention-related aspects on attrition, and describe study design features aimed at reducing or mitigating study attrition. METHODS: We searched PubMed, Embase (Ovid), PsycINFO (Ovid), the Cochrane Central Register of Controlled Trials, and Web of Science, and conducted a gray literature search on Google Scholar in June 2022. We included randomized controlled trials that compared CA interventions against control groups; we excluded studies that lasted for 1 session only, as well as Wizard of Oz interventions. We assessed the risk of bias in the included studies using the Cochrane Risk of Bias Tool 2.0. Random-effects proportional meta-analysis was applied to calculate the pooled dropout rates in the intervention groups, and random-effects meta-analysis was used to compare the attrition rate in the intervention groups with that in the control groups. We used a narrative review to summarize the findings. RESULTS: The systematic search retrieved 4566 records from peer-reviewed databases and citation searches, of which 41 (0.90%) randomized controlled trials met the inclusion criteria. The pooled overall attrition rate in the intervention groups was 21.84% (95% CI 16.74%-27.36%; I²=94%). Short-term studies lasting ≤8 weeks showed a lower attrition rate (18.05%, 95% CI 9.91%-27.76%; I²=94.6%) than long-term studies lasting >8 weeks (26.59%, 95% CI 20.09%-33.63%; I²=93.89%). Intervention group participants were more likely to drop out than control group participants in both short-term (log odds ratio 1.22, 95% CI 0.99-1.50; I²=21.89%) and long-term studies (log odds ratio 1.33, 95% CI 1.08-1.65; I²=49.43%). Intervention-related characteristics associated with higher attrition included delivering the CA as a stand-alone intervention without human support, lacking a symptom tracker feature, lacking a visual representation of the CA, and comparing the CA intervention against a waitlist control. No participant-level factor reliably predicted attrition. CONCLUSIONS: Our results indicate that approximately one-fifth of participants drop out of CA interventions in short-term studies. High heterogeneity made it difficult to generalize the findings. Our results suggest that future CA interventions should adopt a blended design with human support, use symptom tracking, compare CA intervention groups against active rather than waitlist controls, and include a visual representation of the CA to reduce the attrition rate. TRIAL REGISTRATION: PROSPERO International Prospective Register of Systematic Reviews CRD42022341415; https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42022341415.
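A minimal sketch of the pooling approach named in the methods (random-effects proportional meta-analysis of dropout rates on the logit scale, with a DerSimonian-Laird estimate of between-study variance). The per-study dropout counts below are hypothetical placeholders, not data from the review:

# Minimal sketch: random-effects pooling of dropout proportions (logit scale).
import numpy as np

# Hypothetical per-study counts: (dropouts, participants randomized to the CA arm)
studies = [(12, 60), (25, 110), (8, 45), (40, 150), (18, 70)]
events = np.array([e for e, n in studies], dtype=float)
totals = np.array([n for e, n in studies], dtype=float)

# Logit-transformed proportions and their within-study variances
p = events / totals
yi = np.log(p / (1 - p))
vi = 1 / events + 1 / (totals - events)

# DerSimonian-Laird estimate of between-study variance (tau^2)
wi = 1 / vi
fixed = np.sum(wi * yi) / np.sum(wi)
q = np.sum(wi * (yi - fixed) ** 2)
df = len(yi) - 1
c = np.sum(wi) - np.sum(wi ** 2) / np.sum(wi)
tau2 = max(0.0, (q - df) / c)

# Random-effects pooled estimate, back-transformed to a proportion
wi_re = 1 / (vi + tau2)
mu = np.sum(wi_re * yi) / np.sum(wi_re)
se = np.sqrt(1 / np.sum(wi_re))
pooled = 1 / (1 + np.exp(-mu))
ci = 1 / (1 + np.exp(-(mu + np.array([-1.96, 1.96]) * se)))
i2 = max(0.0, (q - df) / q) * 100  # I^2 heterogeneity statistic

print(f"Pooled dropout rate: {pooled:.1%} (95% CI {ci[0]:.1%}-{ci[1]:.1%}), I2={i2:.0f}%")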


Subject(s)
Communication, Mental Health, Humans, Databases, Factual, Digital Health, Gray Literature
2.
J Med Internet Res ; 25: e50767, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37910153

ABSTRACT

BACKGROUND: Conversational agents (CAs), or chatbots, are computer programs that simulate conversations with humans. The use of CAs in health care settings is recent and rapidly increasing, which often translates to poor reporting of the CA development and evaluation processes and unreliable research findings. We previously developed and published DISCOVER, a conceptual framework for designing, developing, evaluating, and implementing a smartphone-delivered, rule-based conversational agent, consisting of 3 iterative stages (CA design, development, and evaluation and implementation) complemented by 2 cross-cutting themes (user-centered design and data privacy and security). OBJECTIVE: This study aims to perform in-depth, semistructured interviews with multidisciplinary experts in health care CAs to gather their views on the definition and classification of health care CAs and to evaluate and validate the DISCOVER conceptual framework. METHODS: We conducted one-on-one semistructured interviews via Zoom (Zoom Video Communications) with 12 multidisciplinary CA experts, using an interview guide based on our framework. The interviews were audio recorded, transcribed by the research team, and analyzed using thematic analysis. RESULTS: Following participants' input, we defined CAs as digital interfaces that use natural language to engage in a synchronous dialogue using ≥1 communication modality, such as text, voice, images, or video. CAs were classified into 13 categories: response generation method, input and output modalities, CA purpose, deployment platform, CA development modality, appearance, length of interaction, type of CA-user interaction, dialogue initiation, communication style, CA personality, human support, and type of health care intervention. Experts considered that the conceptual framework could be adapted for artificial intelligence-based CAs; however, despite recent advances in artificial intelligence, including large language models, the technology is not able to ensure safety and reliability in health care settings. Finally, aligned with participants' feedback, we present an updated iteration of the conceptual framework for health care conversational agents (CHAT), with key considerations for CA design, development, and evaluation and implementation, complemented by 3 cross-cutting themes: ethics, user involvement, and data privacy and security. CONCLUSIONS: We present an expanded, validated CHAT framework that aims to guide researchers from a variety of backgrounds and with different levels of expertise in the design, development, and evaluation and implementation of rule-based CAs in health care settings.


Subject(s)
Artificial Intelligence, Voice, Humans, Reproducibility of Results, Communication, Language
3.
J Med Internet Res ; 25: e45984, 2023 07 18.
Article in English | MEDLINE | ID: mdl-37463036

ABSTRACT

BACKGROUND: Mental disorders cause a substantial health-related burden worldwide. Mobile health interventions are increasingly being used to promote mental health and well-being, as they could improve access to treatment and reduce associated costs. Behavior change is an important feature of interventions aimed at improving mental health and well-being. There is a need to discern the active components that can promote behavior change in such interventions and ultimately improve users' mental health. OBJECTIVE: This study systematically identified mental health conversational agents (CAs) currently available in app stores and assessed the behavior change techniques (BCTs) they use. We further described their main features, technical aspects, and quality in terms of engagement, functionality, esthetics, and information using the Mobile Application Rating Scale. METHODS: The search, selection, and assessment of apps were adapted from a systematic review methodology and included a search, 2 rounds of selection, and an evaluation following predefined criteria. We conducted a systematic app search of Apple's App Store and Google Play using 42matters. Apps with English-language CAs that were uploaded or updated from January 2020 onward and that provided interventions aimed at improving mental health and well-being and at the assessment or management of mental disorders were tested by at least 2 reviewers. The BCT taxonomy v1, a comprehensive list of 93 BCTs, was used to identify the specific behavior change components in the CAs. RESULTS: We found 18 app-based mental health CAs. Most CAs had <1000 user ratings on both app stores (12/18, 67%) and targeted several conditions such as stress, anxiety, and depression (13/18, 72%). All CAs addressed >1 mental disorder. Most CAs (14/18, 78%) used cognitive behavioral therapy (CBT). Half (9/18, 50%) of the CAs identified were rule based (ie, they only offered predetermined answers), and the other half (9/18, 50%) were artificial intelligence enhanced (ie, they included open-ended questions). The CAs used 48 different BCTs and included on average 15 (SD 8.77; range 4-30) BCTs. The most common BCTs were 3.3 "Social support (emotional)," 4.1 "Instructions for how to perform a behavior," 11.2 "Reduce negative emotions," and 6.1 "Demonstration of the behavior." Approximately one-third (5/14, 36%) of the CAs claiming to be CBT based did not include core CBT concepts. CONCLUSIONS: Mental health CAs mostly targeted various mental health issues such as stress, anxiety, and depression, reflecting a broad intervention focus. The most common BCTs identified serve to promote the self-management of mental disorders, with few therapeutic elements. CA developers should consider the quality of information, user confidentiality, access, and emergency management when designing mental health CAs. Future research should assess the role of artificial intelligence in promoting behavior change within CAs and determine the choice of BCTs in evidence-based psychotherapies to enable the systematic, consistent, and transparent development and evaluation of effective digital mental health interventions.
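A minimal sketch of the kind of tally that yields the descriptive BCT statistics reported above (BCTs per app and the most frequent codes). The app names and BCT code assignments below are hypothetical placeholders, not the study's coding:

# Minimal sketch: summarizing BCT codes assigned per app (BCT Taxonomy v1 numbering).
from collections import Counter
from statistics import mean, stdev

bct_codes_per_app = {
    "app_a": {"3.3", "4.1", "11.2", "6.1"},
    "app_b": {"3.3", "4.1", "1.2", "2.3", "11.2"},
    "app_c": {"4.1", "11.2"},
}

counts = [len(codes) for codes in bct_codes_per_app.values()]
print(f"BCTs per app: mean {mean(counts):.1f}, SD {stdev(counts):.2f}, "
      f"range {min(counts)}-{max(counts)}")

# Frequency of each BCT across apps, to identify the most common techniques
freq = Counter(code for codes in bct_codes_per_app.values() for code in codes)
for code, n in freq.most_common(3):
    print(f"BCT {code}: used in {n}/{len(bct_codes_per_app)} apps")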


Subject(s)
Mobile Applications, Self-Management, Telemedicine, Humans, Mental Health, Artificial Intelligence, Behavior Therapy/methods, Self-Management/methods, Telemedicine/methods
4.
BMJ Open ; 13(6): e068740, 2023 06 28.
Article in English | MEDLINE | ID: mdl-37380211

ABSTRACT

INTRODUCTION: Online multiple-choice question (MCQ) quizzes are popular in medical education because of their ease of access and their suitability for test-enhanced learning. However, a general lack of motivation among students often results in decreasing usage over time. We aim to address this limitation by developing Telegram Education for Surgical Learning and Application Gamified (TESLA-G), an online platform for surgical education that incorporates game elements into conventional MCQ quizzes. METHODS AND ANALYSIS: This online, pilot randomised controlled trial will be conducted over 2 weeks. Fifty full-time undergraduate medical students from a medical school in Singapore will be recruited and randomised into an intervention group (TESLA-G) and an active control group (non-gamified quizzing platform) with a 1:1 allocation ratio, stratified by year of study. We will evaluate TESLA-G in the area of endocrine surgery education. Our platform is designed based on Bloom's taxonomy of learning domains: questions are created in blocks of five questions per endocrine surgery topic, with each question corresponding to one level of Bloom's taxonomy. This structure promotes mastery while boosting student engagement and motivation. All questions are created by two board-certified general surgeons and one endocrinologist, and validated by the research team. The feasibility of this pilot study will be determined quantitatively by participant enrolment, participant retention and degree of completion of the quizzes. The acceptability of the intervention will be assessed quantitatively by a postintervention learner satisfaction survey consisting of a system satisfaction questionnaire and a content satisfaction questionnaire. The improvement of surgical knowledge will be assessed by comparing the scores of preintervention and postintervention knowledge tests, which consist of separately created questions on endocrine surgery. Retention of surgical knowledge will be measured using a follow-up knowledge test administered 2 weeks postintervention. Finally, qualitative feedback from participants regarding their experience will be obtained and thematically analysed. ETHICS AND DISSEMINATION: This research is approved by the Nanyang Technological University (NTU), Singapore, Institutional Review Board (Reference Number: IRB-2021-732). All participants will be expected to read and sign a letter of informed consent before they are considered recruited into the study. This study poses minimal risk to participants. Study results will be published in peer-reviewed open-access journals and presented at conferences. TRIAL REGISTRATION NUMBER: NCT05520671.
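A minimal sketch of 1:1 allocation stratified by year of study, as described in the protocol. The participant records, block size, and seed below are hypothetical; an actual trial would rely on a pre-generated, concealed allocation sequence:

# Minimal sketch: permuted-block 1:1 randomisation within each year-of-study stratum.
import random

random.seed(42)  # fixed seed so the example is reproducible

participants = [
    {"id": "P01", "year": 1}, {"id": "P02", "year": 1},
    {"id": "P03", "year": 2}, {"id": "P04", "year": 2},
    {"id": "P05", "year": 3}, {"id": "P06", "year": 3},
]

def allocate_stratified(participants, arms=("TESLA-G", "control"), block_size=2):
    """Permuted-block randomisation within each stratum (year of study)."""
    allocation = {}
    strata = {}
    for p in participants:
        strata.setdefault(p["year"], []).append(p)
    for year, members in strata.items():
        for i in range(0, len(members), block_size):
            block = list(arms) * (block_size // len(arms))
            random.shuffle(block)
            for person, arm in zip(members[i:i + block_size], block):
                allocation[person["id"]] = arm
    return allocation

print(allocate_stratified(participants))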


Subject(s)
Students, Medical, Humans, Pilot Projects, Educational Status, Learning, Motivation, Randomized Controlled Trials as Topic
5.
J Med Internet Res ; 25: e44548, 2023 04 19.
Article in English | MEDLINE | ID: mdl-37074762

ABSTRACT

BACKGROUND: The rapid proliferation of mental health interventions delivered through conversational agents (CAs) calls for high-quality evidence to support their implementation and adoption. Selecting appropriate outcomes, instruments for measuring outcomes, and assessment methods is crucial for ensuring that interventions are evaluated effectively and with a high level of quality. OBJECTIVE: We aimed to identify the types of outcomes, outcome measurement instruments, and assessment methods used to assess the clinical, user experience, and technical outcomes in studies that evaluated the effectiveness of CA interventions for mental health. METHODS: We undertook a scoping review of the relevant literature on the types of outcomes, outcome measurement instruments, and assessment methods in studies that evaluated the effectiveness of CA interventions for mental health. We performed a comprehensive search of electronic databases, including PubMed, the Cochrane Central Register of Controlled Trials, Embase (Ovid), PsycINFO, and Web of Science, as well as Google Scholar and Google. We included experimental studies evaluating CA mental health interventions. Screening and data extraction were performed independently by 2 review authors in parallel. Descriptive and thematic analyses of the findings were performed. RESULTS: We included 32 studies, which targeted the promotion of mental well-being (17/32, 53%) and/or the treatment and monitoring of mental health symptoms (21/32, 66%). The studies reported 203 outcome measurement instruments used to measure clinical outcomes (123/203, 60.6%), user experience outcomes (75/203, 36.9%), technical outcomes (2/203, 1.0%), and other outcomes (3/203, 1.5%). Most of the outcome measurement instruments were used in only 1 study (150/203, 73.9%) and were self-reported questionnaires (170/203, 83.7%); electronic delivery via survey platforms was the most common administration mode (61/203, 30.0%). No validity evidence was cited for more than half of the outcome measurement instruments (107/203, 52.7%), which were largely created or adapted for the study in which they were used (95/107, 88.8%). CONCLUSIONS: The diversity of outcomes and the choice of outcome measurement instruments employed in studies on CAs for mental health point to the need for an established minimum core outcome set and greater use of validated instruments. Future studies should also capitalize on the affordances made available by CAs and smartphones to streamline evaluation and reduce the self-reporting burden placed on participants.
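A minimal sketch of the descriptive tallying implied above (instruments grouped by outcome type, and instruments used in only 1 study). The instrument records below are hypothetical placeholders, not the review's dataset:

# Minimal sketch: tallying outcome measurement instruments by outcome type and reuse.
from collections import Counter

# (instrument name, outcome type, study ID) -- hypothetical example records
records = [
    ("PHQ-9", "clinical", "s01"), ("PHQ-9", "clinical", "s02"),
    ("GAD-7", "clinical", "s01"), ("Custom usability survey", "user experience", "s03"),
    ("Response latency log", "technical", "s04"),
]

by_type = Counter(outcome for _, outcome, _ in records)
total = len(records)
for outcome, n in by_type.items():
    print(f"{outcome}: {n}/{total} ({n / total:.1%})")

# Instruments that appear in only one study
uses = Counter(name for name, _, _ in records)
single_use = [name for name, n in uses.items() if n == 1]
print(f"Used in only 1 study: {len(single_use)}/{len(uses)}")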


Subject(s)
Mental Health, Outcome Assessment, Health Care, Humans, Communication
6.
J Med Internet Res ; 25: e44542, 2023 03 20.
Article in English | MEDLINE | ID: mdl-36939808

ABSTRACT

BACKGROUND: Mental health interventions delivered through mobile health (mHealth) technologies can increase access to mental health services, especially among university students. The development of mHealth interventions is complex and needs to be context sensitive. There is currently limited evidence on the perceptions, needs, and barriers related to these interventions in the Southeast Asian context. OBJECTIVE: This qualitative study aimed to explore the perceptions of university students and mental health supporters in Singapore about mental health services, campaigns, and mHealth interventions, with a focus on conversational agent interventions for the prevention of common mental disorders such as anxiety and depression. METHODS: We conducted 6 web-based focus group discussions with 30 university students and one-to-one web-based interviews with 11 mental health supporters consisting of faculty members tasked with student pastoral care, a mental health first aider, counselors, psychologists, a clinical psychologist, and a psychiatrist. The qualitative analysis followed a reflexive thematic analysis framework. RESULTS: The following 6 main themes were identified: a healthy lifestyle as students, access to mental health services, the role of mental health promotion campaigns, preferred mHealth engagement features, factors that influence the adoption of mHealth interventions, and the cultural relevance of mHealth interventions. Our findings show that students were reluctant to use mental health services because of the fear of stigma and a possible lack of confidentiality. CONCLUSIONS: Study participants viewed mHealth interventions for mental health as part of a blended intervention. They also felt that future mental health mHealth interventions should be more personalized and capable of managing adverse events such as suicidal ideation.


Subject(s)
Mental Disorders, Telemedicine, Humans, Singapore, Universities, Mental Disorders/prevention & control, Students/psychology
7.
J Med Internet Res ; 24(10): e39243, 2022 10 03.
Article in English | MEDLINE | ID: mdl-36190749

ABSTRACT

BACKGROUND: Conversational agents (CAs) are increasingly used in health care to deliver behavior change interventions. Their evaluation often includes categorizing the behavior change techniques (BCTs) using a classification system, of which the BCT Taxonomy v1 (BCTTv1) is one of the most common. Previous studies have presented descriptive summaries of behavior change interventions delivered by CAs, but no in-depth study reporting the use of BCTs in these interventions has been published to date. OBJECTIVE: This review aims to describe behavior change interventions delivered by CAs and to identify the BCTs and theories guiding their design. METHODS: We searched PubMed, Embase, the Cochrane Central Register of Controlled Trials, and the first 10 pages of Google and Google Scholar in April 2021. We included primary, experimental studies evaluating a behavior change intervention delivered by a CA. BCT coding followed the BCTTv1. Two independent reviewers selected the studies and extracted the data. We performed descriptive analysis and frequent itemset mining to identify BCT clusters. RESULTS: We included 47 studies reporting on mental health (n=19, 40%), chronic disorders (n=14, 30%), and lifestyle change (n=14, 30%) interventions. Of the 47 CAs, 20 (43%) were embodied and 27 (57%) represented a female character. Most CAs were rule based (34/47, 72%). Experimental interventions included 63 BCTs (mean 9 BCTs; range 2-21 BCTs), while comparisons included 32 BCTs (mean 2 BCTs; range 2-17 BCTs). Most interventions included BCTs 4.1 "Instruction on how to perform a behavior" (34/47, 72%), 3.3 "Social support (emotional)" (27/47, 57%), and 1.2 "Problem solving" (24/47, 51%). A total of 12/47 studies (26%) were informed by a behavior change theory, mainly the Transtheoretical Model and the Social Cognitive Theory. Studies using the same behavior change theory included different BCTs. CONCLUSIONS: There is a need for more explicit use of behavior change theories and improved reporting of BCTs in CA interventions to enhance the analysis of intervention effectiveness and improve the reproducibility of research.
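A minimal sketch of frequent itemset mining over per-study BCT sets, in the spirit of the cluster analysis mentioned in the methods. The study IDs, BCT codes, and support threshold below are hypothetical, and the brute-force enumeration stands in for a proper apriori implementation (eg, via a library such as mlxtend):

# Minimal sketch: finding BCT combinations that co-occur in many studies.
from itertools import combinations
from collections import Counter

bcts_per_study = {
    "s01": {"4.1", "3.3", "1.2"},
    "s02": {"4.1", "3.3"},
    "s03": {"4.1", "1.2", "2.3"},
    "s04": {"4.1", "3.3", "1.2"},
}

def frequent_itemsets(transactions, min_support=0.5, max_size=3):
    """Count all BCT combinations up to max_size and keep those whose
    support (share of studies containing the combination) >= min_support."""
    n = len(transactions)
    counts = Counter()
    for items in transactions.values():
        for size in range(1, max_size + 1):
            for combo in combinations(sorted(items), size):
                counts[combo] += 1
    return {combo: c / n for combo, c in counts.items() if c / n >= min_support}

for itemset, support in sorted(frequent_itemsets(bcts_per_study).items(),
                               key=lambda kv: -kv[1]):
    print(f"{itemset}: support {support:.0%}")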


Subject(s)
Behavior Therapy, Social Support, Behavior Therapy/methods, Delivery of Health Care, Female, Humans, Reproducibility of Results